more stable autocomplete #442
Conversation
Nice! I like the changes. Some comments inline. With this approach, the LRUCache map doesn't really make sense as a data structure. The way we use it, we also don't bump recency when a cached completion is retrieved, so this behaves more like a FIFO queue now, where an array might be enough?
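A minimal sketch of that idea (hypothetical, not code from this PR): since lookups never update recency, a bounded cache keyed by prefix degenerates into a plain array with FIFO eviction.

```typescript
// Hypothetical FIFO cache sketch: a plain array is enough when
// retrievals don't update recency (i.e. no LRU behavior is needed).
interface CacheEntry<V> {
    key: string
    value: V
}

class FifoCache<V> {
    private entries: CacheEntry<V>[] = []

    constructor(private readonly maxSize: number) {}

    get(key: string): V | undefined {
        return this.entries.find(e => e.key === key)?.value
    }

    set(key: string, value: V): void {
        this.entries.push({ key, value })
        // Evict the oldest entry once the bound is exceeded (FIFO, not LRU).
        if (this.entries.length > this.maxSize) {
            this.entries.shift()
        }
    }
}
```

The trade-off versus a real LRU map: eviction order ignores how often an entry is hit, but insertion and eviction stay O(1) and the code is simpler.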
The changes look good though. I would argue that it's not simpler though, there's more custom behavior that the cache now implements (previously it conceptually was just a map). However, the implementation is easier - the map lookup was definitely a premature optimization from my side!
@philipp-spiess Thank you for the comments! All great points. And after sleeping on it, I see a bunch more places where it needs work. Will keep this in draft and keep working on it.
Force-pushed from 8491fd5 to cb7484a
Force-pushed from cb7484a to 4598931
@philipp-spiess I think we can significantly simplify the cache from what I have here and use VS Code's builtin "reuse" logic more. Stay tuned...
Force-pushed from 0574363 to 4f13667
Force-pushed from 4f13667 to f52b135
This just makes it harder to debug. VS Code catches errors at a higher level anyway. TODO See #442 (comment)
Force-pushed from facee19 to 1cecf95
Force-pushed from d320ecd to 59bdab1
Force-pushed from 369992c to 7931ba6
Force-pushed from 2dac20b to a65938a
Note: This PR is based on the following other PRs that will be merged independently (then this PR branch will be rebased):
This is great, thanks. I was chatting with @vovakulikov yesterday and came to the same conclusion that these are two different scenarios, and wanted to post a PR today to do the same 😅
@sqs This feels very good to use now, wow! I noticed one case where it's a bit funky still. If you see ghost text and press space, the previous ghost text will stay visible, but we do fire off a new request that will eventually replace the visible text, causing some UI churn: Screen.Recording.2023-08-01.at.11.40.03.mov
This is a great cleanup! I left two inline comments plus the main comment in the PR about the remaining "jitter" case (the latter is minor, and I'm fine with fixing it later).
```ts
it('serves request from cache when a prior request resolves', async () => {
    const prefix1 = 'console.'
    const provider1 = createProvider(prefix1)
    const promise1 = createRequest(prefix1, provider1)

    const prefix2 = 'console.log('
    const provider2 = createProvider(prefix2)
    const promise2 = createRequest(prefix2, provider2)

    provider1.resolveRequest(["log('hello')"])

    expect((await promise1)[0].content).toBe("log('hello')")
    expect((await promise2)[0].content).toBe("'hello')")

    expect(provider1.didFinishNetworkRequest).toBe(true)
    expect(provider2.didFinishNetworkRequest).toBe(false)

    // Ensure that the completed network request does not cause issues
    provider2.resolveRequest(["'world')"])
})
```
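The reuse this test exercises can be sketched roughly as follows (a hypothetical `reuseCompletion` helper, not the PR's actual implementation): when the current prefix extends a pending request's prefix and the typed delta matches the start of the resolved completion, the remainder can be served without a new network request.

```typescript
// Hypothetical sketch: serve a second request from a prior request's result
// by trimming the text the user typed since the original request was made.
function reuseCompletion(
    cachedPrefix: string,
    currentPrefix: string,
    completion: string
): string | null {
    // The current document prefix must extend the cached request's prefix.
    if (!currentPrefix.startsWith(cachedPrefix)) {
        return null
    }
    const typedSince = currentPrefix.slice(cachedPrefix.length)
    // Only reusable when the user typed exactly what the completion suggested.
    if (!completion.startsWith(typedSince)) {
        return null
    }
    return completion.slice(typedSince.length)
}
```

With the test's values, a completion `"log('hello')"` requested at prefix `'console.'` yields `"'hello')"` at prefix `'console.log('`, matching the expectation above.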
Did you remove this logic on purpose? Looking at the logs, it triggers on roughly 10% of requests: https://sourcegraph.looker.com/x/PEfxGviZvYk8tmvTFGIDVr
Thinking about this more, I guess the issue is that the new network caching is no longer able to synthesise completions. Hmm.
Yes, I did remove it for that reason. I did not realize it triggered on ~10% of requests; that is substantial. Do you think this retest-cache behavior is important to preserve for performance? I am not confident in my intuition about the impact from reading the code just now.
@sqs My intuition is yes. It was also the main motivation for not cancelling inflight requests when a new completion is triggered.
I’m less confident whether it had a big impact on the overall metrics (something did in the period this rolled out, but we tweaked a bunch of different things). However, I could definitely notice the change in my IDE, especially when typing, because sometimes you just get immediate completion results and it feels magical.
I'll keep an eye on the metrics and if the latency regresses we know what to do 👍
Force-pushed from 9bc8992 to 9c9ae5a
@philipp-spiess Thanks! I will address the remaining things and get this merged.
Force-pushed from 3048ea0 to d2b3b0b
Force-pushed from d2b3b0b to a731012
Force-pushed from a731012 to 5a5a7c7
@philipp-spiess Thanks! I just fixed the issue you raised at #442 (comment) and added a lot more tests for the typing-as-suggested-by-last-candidate case.
Makes autocomplete not jitter as much as you type, using a cache and a few other techniques to increase (1) the cache hit rate and (2) VS Code's ability to reuse and not invalidate existing completion results.

Why this change?

- Factors out `getInlineCompletions` so we can separately test it and the `InlineCompletionItemProvider` logic.
- Adds tests for `getInlineCompletions` (and all pass without modification in substance).
- … `logId` rewriting or retesting caches for a document (see below).
- … `logId`, and we can do so simply instead of (if the 2 cases are conflated) requiring the cache to support `logId` rewriting.

Examples of bugs or undesirable behaviors this fixes: …
Test plan
Use completions and watch the logIds. Try typing or deleting over ghost text.